Scale-imprecision space
Author: Lewis D. Griffin
Abstract
It is argued that image measurements should satisfy two requirements of physical plausibility: the measurements are of non-zero scale and non-zero imprecision; and two required invariances: nothing is lost by expanding the image, and nothing is lost by increasing the contrast of the image. A model of measurements satisfying these constraints, based on blurring the graph of the incident luminance, is described. Within this framework, several types of filtering can be expressed: mean filtering (ordinary scale space); median filtering; and mode filtering. It is found that of these possibilities, a system based on mode filtering produces interesting results. In particular, edges, defined as discontinuities, naturally appear, and their behaviour over scale and imprecision is presented.

Keywords: Measurement theory; Edge detection; Visual invariances; Median filtering

1. A conceptual justification

There is no doubting the extraordinary effectiveness of mammalian visual systems. While empirical studies progressively reveal how this performance is achieved with neural hardware, it is also possible to progress more theoretically. One such strategy is to assume as a premise that the functional design is as near to perfect as is possible. No doubt one day this assumption will be shown to be false, but in the interim this position provides a sturdy foothold from which to attack. To use this approach one starts with a model that is of ideal performance, and then in incremental steps one modifies it in the direction of physical plausibility. The recognition that the earliest measurements of the retinal irradiance must be achieved with apertures of non-zero area, which leads to the idea of scale space [1], is an example of this approach in action. In this paper I will explore the consequences of the fact that these same initial measurements cannot be of unlimited precision.

The first task of the visual system is to measure the retinal irradiance. If one ignores the quantized nature of light, then the measurements of an ideal visual system would produce an everywhere-defined real-valued function over a continuous two-dimensional domain. However, complete determination of this function using physically plausible measurements is not possible for several reasons: neither arbitrarily fine spatial detail nor arbitrarily small differences in scalar magnitude can be resolved; the image can only be sampled at a finite number of locations, etc. Taking account of these limitations is not easy, particularly while maintaining a tractable mathematical model, and an opportunistic strategy is reasonable. The impossibility of resolving arbitrarily fine spatial detail has been dealt with by the notion of scale space. In this paper, I describe how this constraint may be addressed simultaneously with the impossibility of resolving arbitrarily small differences in scalar magnitude. Rather than modify the scale space theory, I will build the new theory from scratch and then show how it includes scale space.

An idealized device for measuring the value of a scalar field at a point would be something like a sharp sensor connected to a readout device consisting of a slender pointer that roves along a copy of the real line. The sensor is placed into the scalar field and the pointer registers the value measured by the sensor by taking up a location on the real line.
In the idealized device, the sensor has an infinitely sharp tip and the pointer a vanishing width. To modify this picture in the direction of physical plausibility, the sensor tip must be admitted to be of non-zero area (i.e. blunt) and the pointer of non-zero width. Note that these two modifications are related to, but distinct from, the image processing terms spatial resolution and grey-level quantization. Resolution and quantization are to do with the density of sampling of, respectively, space and value (i.e. pixels and grey-levels); and while this is crucial for physical plausibility, these issues do not concern me here.¹

¹ It is essential to distinguish scale and resolution. One reason for the common confusion of the two is that in many imaging systems (e.g. CCD cameras) the scale and the sampling density are intimately linked because apertures do not overlap. This does not need to be the case, though. For example, in mammalian visual systems, although receptors do not overlap, receptive fields (groups of receptors) do.

I shall start with the modification of the sensor tip from zero to non-zero area. The challenge is to modify the concept of a zero area sensor to produce the concept of a non-zero area sensor while retaining its nature as a 'point' operator. One can be optimistic about the possibility of success, since Huntingdon [2] has shown how a consistent and unique geometry can be constructed out of the spheres whose radii are not less than a certain value: 'a perfectly rigorous geometry in which the "points", like the schoolmaster's chalk-marks on the blackboard, are of definite, finite size, and the "lines" and "planes" of definite finite thickness'.

At first blush one might imagine that changing to a sensor with an aperture of non-zero area would not necessitate modifying the read-out device (i.e. the real line and pointer); the intuition is that if we are to be able to treat the non-zero area sensor like a non-zero area point, just a single number should be measured, as it was for a zero-sized sensor. This is too hasty. Euclid, according to Heath's translation [3], characterized a point as 'that which has no part'; and this is all that is required for the characterization of a point within the subject matter of geometry. If this is accepted, then when the concept of a point is replaced with that of a point operator (as Koenderink [4] recommends) it is only obligatory to retain this property; so all that is required is that it is not possible to discern any structure within the point by examination of the output of the operator. Thus, there is nothing wrong with considering the blunt sensor as being composed of an infinite collection of infinitely sharp sensors, as long as the 'wires' from the sharp sensors are not labelled as to their location within the blunt tip.

So, when the change is made to a blunt sensor, an infinite collection of pointers taking up positions on the real line read-out is required, one for each zero area point in the sensor tip! This is easier to imagine if the metaphor is changed midstream by replacing the pointers of the read-out device with pale shadows, so that one may more easily imagine them superimposing and adding. Then it becomes clear that the infinite collection of shadow pointers associated with a blunt sensor depicts the histogram of the luminance values within the sensor aperture; this is what remains when all spatial information is discarded.
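As an illustration of this readout, the sketch below simulates a blunt, zero-imprecision sensor in Python. It is my own sketch rather than anything from the paper; the Gaussian aperture profile, the test image and the bin count are assumptions chosen only for concreteness. It computes the aperture-weighted histogram of the luminance values seen through the sensor tip, which is exactly what remains once spatial information within the aperture has been discarded.

```python
# Minimal sketch (not from the paper): the readout of a blunt sensor with zero
# imprecision is the aperture-weighted histogram of the luminance values under
# its tip. A Gaussian aperture profile is assumed here for illustration.
import numpy as np

def aperture_histogram(image, centre, sigma_scale, bins=64, value_range=(0.0, 1.0)):
    """Histogram of the values seen by a blunt sensor centred at `centre`.

    All spatial information inside the aperture is discarded: only the
    weighted distribution of values remains.
    """
    ys, xs = np.indices(image.shape)
    d2 = (ys - centre[0]) ** 2 + (xs - centre[1]) ** 2
    weights = np.exp(-d2 / (2.0 * sigma_scale ** 2))   # assumed Gaussian sensor tip
    hist, edges = np.histogram(image.ravel(), bins=bins, range=value_range,
                               weights=weights.ravel())
    return hist / hist.sum(), edges

# Example: an aperture straddling a vertical step edge gives a bimodal histogram.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
h, edges = aperture_histogram(img, centre=(16, 16), sigma_scale=4.0)
```

For an aperture placed over a uniform region the histogram collapses into a single bin, recovering the single-pointer readout of the sharp sensor.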
Now I consider modifying the pointer, without modifying the sensor. This is more straightforward: one just uses a pointer with width; no further modifications are entailed.

This is a convenient point to distinguish imprecision from error. I will be using imprecision to indicate the pointer width, i.e. the imprecision is small when the pointer is narrow and large when the pointer is wide. To clarify this, consider a simple measurement method that measures, say, the density of a liquid. It is well known that even if instrumental, personal and systematic errors are eliminated, a random error will always persist [5]. What this means in practice is that if the method is repeatedly applied to the same sample of liquid, different results will be obtained. To calibrate the technique one can take a large (preferably infinite) number of measurements of the density of a liquid of already known density. If these results are plotted, it will be found that they form a distribution centred on the true value (the Central Limit Theorem hints at why this is often well modelled as a normal distribution). Some measure of the width of this distribution is defined to be the imprecision of the method.

One can now imagine applying the method to measure the density of an unknown liquid. If it is applied repeatedly, different readings will be obtained, which again one can plot. The familiar shape of the already determined distribution will gradually re-appear, but since there is only a finite number of samples there will be some uncertainty as to the location of the centre of the distribution. This uncertainty is lessened with more samples but only eliminated with an infinite number of readings, which would allow the full distribution to be recovered. The uncertainty in the position of the centre of the distribution is the error of the reading. The measurements that I am discussing have no concept of error, only of imprecision (so there is still a long way to go for true physical plausibility); they are, as it were, like the final report from an infinite set of measurements.

Thus, the modification from a vanishingly thin pointer to a pointer with width is unimportant as long as the sensor still has zero area: one simply looks at the centre of the pointer, ignoring its width. However, as I will explain, it does become important once the sensor is modified to have non-zero area. From this point on I will use the standard term scale to refer to the aperture area.

If both modifications are put together, blunt sensor and wide pointer, the readout device becomes a superimposition of an infinite collection of wide, pale shadow pointers. As with zero imprecision, this can be interpreted as a histogram of luminance values within the sensor area; but, crucially, this time it is a blurred version of the true histogram. Since the shadows are unlabelled (or uncoloured, say), they cannot be disentangled once they start to overlap, so one cannot use the trick of noting the positions of the centres of the shadows. This shows how non-zero imprecision becomes important if one must 'calculate' with such 'numbers' before one has had a chance to note their midpoints.

Now I return to ideal measurements, and ask how they look within this new framework that accommodates measurements of non-zero scale and non-zero imprecision. They look the same as before: the real line readout is marked with a single, vanishingly thin and infinitely dark shadow at some location.
However, whereas before it seemed that this was the proper state of affairs and blurred histograms a curiosity, from the new position such readouts may be seen as an exceptional form of the more general blurred histogram output; exceptional in the sense that the histogram is zero-valued everywhere apart from at a single value (i.e. the histogram is a delta function).

Having characterized the nature of single measurements, I now consider what the output of an infinite collection of such measurements, one at each location in the field, looks like. This is easiest to do for a 1-D image (see Fig. 1). This figure depicts the (scalar) value axis of each measurement as extending vertically and the single spatial axis of the field as extending horizontally. This gives a function on a 2-D position × value space, which I shall call a measurement function. It becomes clear what this is when the output of ideal measurements is considered: the zero-scale and zero-imprecision measurement function is the graph of the luminance function.

[Fig. 1. An ideal measurement function and four examples of physically plausible measurement functions of non-zero scale and non-zero imprecision, arranged in order of increasing imprecision.]
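To make this construction concrete, the sketch below (my reconstruction for illustration only, not code from the paper) builds a measurement function for a 1-D signal: it starts from the graph of the luminance function and blurs it along the spatial axis by an amount governed by the scale, and along the value axis by an amount governed by the imprecision, so that each column becomes a blurred histogram of the kind described above. The Gaussian blurring profiles, grid sizes and test signal are all assumptions. A mode-filtered reading, of the sort the abstract singles out, can then be taken by locating the peak of each column.

```python
# Minimal sketch (my reconstruction, not the paper's code): blur the graph of a
# 1-D signal over position x value space. sigma_scale is in samples;
# sigma_imprecision is in the units of the signal values.
import numpy as np
from scipy.ndimage import gaussian_filter

def measurement_function(signal, sigma_scale, sigma_imprecision, n_values=128):
    """Return an (n_values, len(signal)) array over value x position space."""
    v_axis = np.linspace(signal.min(), signal.max(), n_values)
    dv = v_axis[1] - v_axis[0]
    # Zero-scale, zero-imprecision measurement function: the graph of the signal,
    # i.e. a single 'dark shadow' at (x, f(x)) for every position x.
    graph = np.zeros((n_values, signal.size))
    rows = np.argmin(np.abs(v_axis[:, None] - signal[None, :]), axis=0)
    graph[rows, np.arange(signal.size)] = 1.0
    # Non-zero scale blurs along the spatial axis; non-zero imprecision blurs
    # each column (a histogram of nearby values) along the value axis.
    blurred = gaussian_filter(graph, sigma=(sigma_imprecision / dv, sigma_scale))
    return blurred, v_axis

# A mode-filtered reading picks, at each position, the value at which the
# blurred column histogram peaks (cf. the mode filtering mentioned in the abstract).
x = np.linspace(0.0, 1.0, 200)
f = (x > 0.5).astype(float) + 0.05 * np.sin(40.0 * x)
M, v = measurement_function(f, sigma_scale=5.0, sigma_imprecision=0.1)
mode_reading = v[M.argmax(axis=0)]
```

As the two blur widths shrink to zero, the blurring disappears and the measurement function reduces to the graph of the signal, in agreement with the ideal case described in the text.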
Similar articles
Metalinguistic Comparison in an Alternative Semantics for Imprecision
This paper offers an analysis of metalinguistic comparatives such as more dumb than crazy in which they differ from ordinary comparatives in the scale on which they compare: ordinary comparatives use scales lexically determined by particular adjectives, but metalinguistic ones use a generally available scale of imprecision or 'pragmatic slack'. To implement this idea, I propose a novel compositi...
Integer Polyhedra for Program Analysis
Polyhedra are widely used in model checking and abstract interpretation. Polyhedral analysis is effective when the relationships between variables are linear, but suffers from imprecision when it is necessary to take into account the integrality of the represented space. Imprecision also arises when non-linear constraints occur. Moreover, in terms of tractability, even a space defined by linear...
Delicacy, Imprecision, and Uncertainty of Oceanic Simulations: An Investigation with the Regional Oceanic Modeling System (ROMS)
In this project our long-term goal is to determine the ways and degrees to which realistically complex oceanic and atmospheric simulation models have an irreducible imprecision, hence an irreducible uncertainty in their analysis and forecast products. This goal is a natural accompaniment to the goal of continuing the evolution of the Regional Oceanic Modeling System (ROMS) as a multi-scale, mul...
Peak-fitting and integration imprecision in the Aerodyne aerosol mass spectrometer: effects of mass accuracy on location-constrained fits
The errors inherent in the fitting and integration of the pseudo-Gaussian ion peaks in Aerodyne high-resolution aerosol mass spectrometers (HR-AMSs) have not been previously addressed as a source of imprecision for these or similar instruments. This manuscript evaluates the significance of this imprecision and proposes a method for their estimation in routine data analysis. In the first part of...
Irreducible imprecision in atmospheric and oceanic simulations.
Atmospheric and oceanic computational simulation models often successfully depict chaotic space-time patterns, flow phenomena, dynamical balances, and equilibrium distributions that mimic nature. This success is accomplished through necessary but non-unique choices for discrete algorithms, parameterizations, and coupled contributing processes that introduce structural instability into the model...
Journal: Image and Vision Computing
Volume: 15
Pages: 369-398
Published: 1997